"An AI-driven emotion engine that listens, understands, and elevates your vibe with the perfect tune."
Y.M.I.R (Yielding Music For Internal Restoration) is an intelligent, emotion-driven AI system that analyzes your facial expressions, voice tone, and text interactions in real time to decode your emotions, then curates a music experience tailored to your current vibe. Whether you're smiling, sulking, or somewhere in between, Y.M.I.R gets you and plays exactly what your soul needs to hear.
During the hackathon, we transformed our idea into a fully functional AI-powered emotion recognition and music recommendation system: Y.M.I.R. Here's what we accomplished during this time:

1. Designed and developed the complete backend for real-time multimodal emotion detection (facial expressions + text sentiment).
2. Integrated a smart chatbot powered by the GROQ API that analyzes text-based emotions and interacts naturally with users.
3. Implemented facial emotion recognition using deep learning frameworks (such as DeepFace), running directly in the browser.
4. Fused the multimodal emotion analysis to determine the dominant emotion from both face and text.
5. Built a music recommendation engine that maps detected emotions to curated songs using content-based filtering.
6. Logged emotional data in JSON format and created analytics to track mood over time.
7. Developed a clean, interactive web interface that brings together the camera module, chatbot, and music player.
8. Added advanced UI/UX features like emotion-based animations, floating mini-players, and real-time feedback.

In short: we built a fully running EmotionAI ecosystem from scratch, live, during the hackathon, combining AI, music, mood, and good vibes 🎧✨
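The facial-emotion step could look roughly like the sketch below. The `DeepFace.analyze` call is the real library API, but `analyze_frame` and `dominant_emotion` are hypothetical helper names, not Y.M.I.R's actual code:

```python
# Minimal sketch of facial emotion recognition with DeepFace (assumption:
# frames are saved to disk before analysis; Y.M.I.R's real pipeline may differ).

def dominant_emotion(scores):
    """Return the label with the highest confidence from an emotion-score dict."""
    return max(scores, key=scores.get)

def analyze_frame(frame_path):
    """Run DeepFace on one captured frame (requires `pip install deepface`)."""
    from deepface import DeepFace  # lazy import so the helper above stays portable
    result = DeepFace.analyze(img_path=frame_path, actions=["emotion"],
                              enforce_detection=False)
    return result[0]["emotion"]  # e.g. {"happy": 91.2, "sad": 1.3, ...}

if __name__ == "__main__":
    # With a real webcam frame you would call: analyze_frame("frame.jpg")
    print(dominant_emotion({"happy": 91.2, "sad": 1.3, "neutral": 7.5}))
```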
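The fusion step, determining one dominant emotion from face and text, can be sketched as a weighted average over a shared label set. The 0.6/0.4 weights and the function name are illustrative assumptions, not Y.M.I.R's tuned values:

```python
# Hedged sketch of multimodal fusion: blend facial-emotion probabilities with
# text-sentiment probabilities. Weights are illustrative, not the real ones.

FACE_WEIGHT, TEXT_WEIGHT = 0.6, 0.4

def fuse_emotions(face_scores, text_scores):
    """Weighted average per label; returns (dominant label, fused scores)."""
    labels = set(face_scores) | set(text_scores)
    fused = {
        label: FACE_WEIGHT * face_scores.get(label, 0.0)
             + TEXT_WEIGHT * text_scores.get(label, 0.0)
        for label in labels
    }
    return max(fused, key=fused.get), fused
```

A missing label in one modality simply contributes zero, so the two models do not need identical label sets.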
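Content-based filtering for the music engine can be sketched as ranking tracks by distance between their audio features and an emotion's target features. The track names, feature values, and emotion targets below are all made up for illustration:

```python
# Hedged sketch of content-based filtering: each track carries (valence, energy)
# features, each emotion maps to a target vector, and tracks are ranked by
# Euclidean distance to that target. All data here is illustrative.
import math

EMOTION_TARGETS = {            # hypothetical target (valence, energy) per emotion
    "happy": (0.9, 0.8),
    "sad":   (0.2, 0.3),
    "angry": (0.3, 0.9),
}

TRACKS = [                     # (title, valence, energy) -- toy catalogue
    ("Sunny Side Up", 0.85, 0.75),
    ("Rainy Window",  0.15, 0.25),
    ("Storm Runner",  0.35, 0.95),
]

def recommend(emotion, k=2):
    """Return the k tracks whose features sit closest to the emotion's target."""
    target = EMOTION_TARGETS[emotion]
    ranked = sorted(TRACKS, key=lambda t: math.dist(target, (t[1], t[2])))
    return [title for title, _, _ in ranked[:k]]
```

In a real catalogue these features would come from audio-analysis metadata rather than hand-written tuples.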
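The JSON mood log and its analytics can be sketched as an append-only JSON Lines file plus a frequency count. The file name and record shape are assumptions:

```python
# Hedged sketch of emotion logging: append one JSON record per detection,
# then summarize how often each mood appeared. Paths/fields are assumptions.
import json
from collections import Counter
from datetime import datetime, timezone

LOG_PATH = "emotion_log.json"  # hypothetical log file (JSON Lines format)

def log_emotion(emotion, path=LOG_PATH):
    """Append a timestamped emotion record to the log."""
    record = {"emotion": emotion,
              "timestamp": datetime.now(timezone.utc).isoformat()}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

def mood_summary(path=LOG_PATH):
    """Count occurrences of each logged emotion -- the basis of mood analytics."""
    with open(path) as f:
        return Counter(json.loads(line)["emotion"] for line in f if line.strip())
```

One record per line keeps appends cheap and lets the analytics view stream the file without loading it all at once.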